Method of encoding an image
Patent Abstract:
METHOD OF ENCODING A MOTION VECTOR, APPARATUS FOR ENCODING A MOTION VECTOR, METHOD OF DECODING A MOTION VECTOR, APPARATUS FOR DECODING A MOTION VECTOR, AND COMPUTER-READABLE RECORDING MEDIA. Methods and apparatuses for encoding and decoding a motion vector are provided. The motion vector encoding method includes: selecting, as a mode of encoding information about a motion vector predictor of the current block, a first mode in which information indicating the motion vector predictor from among at least one motion vector predictor is encoded, or a second mode in which information indicating generation of the motion vector predictor based on blocks or pixels included in a previously encoded area adjacent to the current block is encoded; determining the motion vector predictor of the current block according to the selected mode and encoding the information about the motion vector predictor of the current block; and encoding a difference vector between the motion vector of the current block and the motion vector predictor of the current block.
Publication number: BR112012001514B1
Application number: R112012001514-1
Filing date: 2010-08-13
Publication date: 2022-02-01
Inventors: Tammy Lee; Woo-jin Han; Kyo-hyuk Lee
Applicant: Samsung Electronics Co., Ltd.
Primary IPC:
Patent Description:
Technical Field
[0001] Apparatus and methods consistent with exemplary embodiments relate to a method and apparatus for encoding a motion vector, and more particularly, to a method and apparatus for encoding a motion vector predictor of a current block.
Prior Art
[0002] A codec such as H.264/MPEG-4 Advanced Video Coding (AVC) of the Moving Picture Experts Group (MPEG) uses motion vectors of previously encoded blocks adjacent to a current block to predict a motion vector of the current block. That is, an average value of the motion vectors of previously encoded blocks adjacent to the left, upper, and upper-right sides of a current block is used as a motion vector predictor of the current block.
Description of the Invention
Problem Solution
[0003] Exemplary embodiments provide a method and apparatus for encoding and decoding a motion vector, and a computer-readable recording medium storing a computer-readable program for performing the method.
Advantageous Effects of the Invention
[0004] According to the present application, a motion vector is efficiently encoded based on a more accurate motion vector predictor.
Brief Description of Figures
[0005] The above and/or other aspects will become more apparent from the following detailed description of exemplary embodiments with reference to the accompanying Figures, in which:
Figure 1 is a block diagram of an apparatus for encoding an image, according to an exemplary embodiment;
Figure 2 is a block diagram of an apparatus for decoding an image, according to an exemplary embodiment;
Figure 3 illustrates hierarchical encoding units, according to an exemplary embodiment;
Figure 4 is a block diagram of an image encoder based on an encoding unit, according to an exemplary embodiment;
Figure 5 is a block diagram of an image decoder based on an encoding unit, according to an exemplary embodiment;
Figure 6 illustrates a maximum encoding unit, an encoding subunit, and a prediction unit, according to an exemplary embodiment;
Figure 7 illustrates an encoding unit and a transformation unit, according to an exemplary embodiment;
Figures 8A and 8B illustrate division shapes of an encoding unit, a prediction unit, and a transformation unit, according to an exemplary embodiment;
Figure 9 is a block diagram of an apparatus for encoding a motion vector, according to an exemplary embodiment;
Figures 10A and 10B illustrate motion vector predictor candidates of an explicit mode, according to an exemplary embodiment;
Figures 11A to 11C illustrate motion vector predictor candidates of an explicit mode, according to another exemplary embodiment;
Figure 12 illustrates a method of generating a motion vector predictor in an implicit mode, according to an exemplary embodiment;
Figure 13 is a block diagram of an apparatus for decoding a motion vector, according to an exemplary embodiment;
Figure 14 is a flowchart of a method of encoding a motion vector, according to an exemplary embodiment; and
Figure 15 is a flowchart of a method of decoding a motion vector, according to an exemplary embodiment.
Best Mode for Carrying Out the Invention
[0006] According to an aspect of an exemplary embodiment, there is provided a method of encoding a motion vector of a current block, the method including: selecting, as a mode of encoding information about a motion vector predictor of the current block, a first mode in which information indicating the motion vector predictor from among at least one motion vector predictor is encoded, or a second mode in which information indicating generation of the motion vector predictor based on blocks or pixels included in a previously encoded area adjacent to the current block is encoded; determining the motion vector predictor of the current block according to the selected mode and encoding the information about the motion vector predictor of the current block; and encoding a difference vector between the motion vector of the current block and the motion vector predictor of the current block.
[0007] The selecting of the first or second mode may include selecting the first or second mode based on a depth indicating a degree of decrease from a size of a maximum encoding unit of a current frame or slice to a size of the current block.
[0008] The selecting of the first or second mode may include selecting the first or second mode in a unit of a current frame or slice including the current block.
[0009] The selecting of the first or second mode may include selecting the first or second mode according to whether the current block is encoded in a skip mode.
[00010] The at least one motion vector predictor may include a first motion vector of a block adjacent to a left side of the current block, a second motion vector of a block adjacent to an upper side of the current block, and a third motion vector of a block adjacent to an upper-right side of the current block.
[00011] The at least one motion vector predictor may further include an average value of the first motion vector, the second motion vector, and the third motion vector.
[00012] The at least one motion vector predictor may further include a motion vector predictor generated based on a motion vector of a block co-located with the current block in a reference frame and a temporal distance between the reference frame and a current frame.
[00013] The information indicating generation of the motion vector predictor based on blocks or pixels included in a previously encoded area adjacent to the current block may be information indicating generation of the motion vector predictor of the current block based on an average value of a first motion vector of a block adjacent to a left side of the current block, a second motion vector of a block adjacent to an upper side of the current block, and a third motion vector of a block adjacent to an upper-right side of the current block.
[00014] The information indicating generation of the motion vector predictor based on blocks or pixels included in a previously encoded area adjacent to the current block may be information indicating generation of the motion vector predictor of the current block based on a motion vector generated by searching a reference frame using pixels included in the previously encoded area adjacent to the current block.
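As a reading aid, the two-mode encoding flow summarized in paragraphs [0006] to [0009] can be sketched in Python. This is a minimal sketch, not the claimed implementation; all names (EXPLICIT, IMPLICIT, encode_motion_vector) and the candidate-selection rule are assumptions introduced here for illustration.

```python
# Minimal sketch of the first-mode / second-mode encoding flow. Motion
# vectors are (x, y) tuples; names are illustrative assumptions.
EXPLICIT, IMPLICIT = 0, 1  # first mode, second mode

def encode_motion_vector(current_mv, candidates, implicit_predictor, mode):
    """Return (mode flag, predictor info, difference vector)."""
    if mode == EXPLICIT:
        # First mode: pick a candidate and signal its index explicitly.
        index = min(range(len(candidates)),
                    key=lambda i: abs(current_mv[0] - candidates[i][0])
                              + abs(current_mv[1] - candidates[i][1]))
        predictor, predictor_info = candidates[index], index
    else:
        # Second mode: the predictor is re-derived from the previously
        # encoded area adjacent to the block, so no index is signalled.
        predictor, predictor_info = implicit_predictor, None
    mvd = (current_mv[0] - predictor[0], current_mv[1] - predictor[1])
    return mode, predictor_info, mvd
```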
[00015] According to an aspect of another exemplary embodiment, there is provided an apparatus for encoding a motion vector of a current block, the apparatus including: a predictor which selects, as a mode of encoding information about a motion vector predictor of the current block, a first mode in which information indicating the motion vector predictor from among at least one motion vector predictor is encoded, or a second mode in which information indicating generation of the motion vector predictor based on blocks or pixels included in a previously encoded area adjacent to the current block is encoded, and which determines the motion vector predictor of the current block based on the selected mode; a first encoder which encodes the information about the motion vector predictor of the current block determined based on the selected mode; and a second encoder which encodes a difference vector between a motion vector of the current block and the motion vector predictor of the current block.
[00016] According to an aspect of another exemplary embodiment, there is provided a method of decoding a motion vector of a current block, the method including: decoding information about a motion vector predictor of the current block encoded according to a mode selected from a first mode and a second mode; decoding a difference vector between a motion vector of the current block and the motion vector predictor of the current block; generating the motion vector predictor of the current block based on the decoded information about the motion vector predictor of the current block; and restoring the motion vector of the current block based on the motion vector predictor and the difference vector, wherein the first mode is a mode in which information indicating the motion vector predictor from among at least one motion vector predictor is encoded, and the second mode is a mode in which information indicating generation of the motion vector predictor based on blocks or pixels included in a previously decoded area adjacent to the current block is encoded.
[00017] According to an aspect of another exemplary embodiment, there is provided an apparatus for decoding a motion vector of a current block, the apparatus including: a first decoder which decodes information about a motion vector predictor of the current block encoded according to a mode selected from a first mode and a second mode; a second decoder which decodes a difference vector between a motion vector of the current block and the motion vector predictor of the current block; a predictor which generates the motion vector predictor of the current block based on the decoded information about the motion vector predictor of the current block; and a motion vector restoration unit which restores the motion vector of the current block based on the motion vector predictor and the difference vector, wherein the first mode is a mode in which information indicating the motion vector predictor from among at least one motion vector predictor is encoded, and the second mode is a mode in which information indicating generation of the motion vector predictor based on blocks or pixels included in a previously decoded area adjacent to the current block is encoded.
[00018] According to an aspect of another exemplary embodiment, there is provided a computer-readable recording medium storing a computer-readable program for performing the motion vector encoding method and the motion vector decoding method.
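The encoding and decoding aspects above are tied together by one invariant: the encoder transmits the difference vector mvd = mv - mvp, and the decoder restores mv = mvp + mvd. A minimal sketch, with assumed function names:

```python
# Round trip of the difference vector between the motion vector (mv) and
# the motion vector predictor (mvp); function names are illustrative.
def encode_difference(mv, mvp):
    return (mv[0] - mvp[0], mv[1] - mvp[1])

def restore_motion_vector(mvp, mvd):
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

mv, mvp = (5, -3), (4, -2)
mvd = encode_difference(mv, mvp)             # (1, -1)
assert restore_motion_vector(mvp, mvd) == mv
```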
Mode for Invention
[00019] Exemplary embodiments will now be described more fully with reference to the accompanying Figures, in which like reference numerals refer to like elements throughout. Expressions such as "at least one of," when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. In the present specification, an "image" may denote a still image of a video or a moving image, that is, the video itself.
[0020] Figure 1 is a block diagram of an apparatus 100 for encoding an image, according to an exemplary embodiment. Referring to Figure 1, the apparatus 100 includes a maximum encoding unit divider 110, an encoding depth determiner 120, an image data encoder 130, and an encoding information encoder 140.
[0021] The maximum encoding unit divider 110 can divide a current frame or slice based on a maximum encoding unit, which is an encoding unit of the largest size. That is, the maximum encoding unit divider 110 can divide the current frame or slice to obtain at least one maximum encoding unit.
[0022] According to an exemplary embodiment, an encoding unit may be represented using a maximum encoding unit and a depth. As described above, the maximum encoding unit indicates an encoding unit having the largest size among the encoding units of the current frame, and the depth indicates the size of an encoding subunit obtained by hierarchically decreasing the encoding unit. As the depth increases, an encoding unit may decrease from a maximum encoding unit to a minimum encoding unit, wherein the depth of the maximum encoding unit is defined as a minimum depth and the depth of the minimum encoding unit is defined as a maximum depth. Since the size of the encoding unit decreases from the maximum encoding unit as the depth increases, a k-th-depth encoding subunit may include a plurality of (k+n)-th-depth encoding subunits (where k and n are integers equal to or greater than 1).
[0023] According to an increase in the size of a frame to be encoded, encoding an image in a larger encoding unit can yield a higher image compression ratio. However, if a larger encoding unit is fixed, an image may not be efficiently encoded, because image characteristics change continuously.
[0024] For example, when a flat area such as the sea or sky is encoded, the larger the encoding unit, the more the compression ratio can increase. However, when a complex area, such as people or buildings, is encoded, the smaller the encoding unit, the more the compression ratio can increase.
[0025] Thus, according to an exemplary embodiment, a different maximum image encoding unit and a different maximum depth are set for each frame or slice. Since a maximum depth indicates the maximum number of times an encoding unit can decrease, the size of each minimum encoding unit included in a maximum image encoding unit can be variably set according to a maximum depth.
[0026] The encoding depth determiner 120 determines a maximum depth. For example, the maximum depth can be determined based on the Rate-Distortion (R-D) cost. Also, the maximum depth can be determined differently for each frame or slice, or for each maximum encoding unit. The determined maximum depth is provided to the encoding information encoder 140, and the image data according to maximum encoding units is provided to the image data encoder 130.
[0027] The maximum depth indicates an encoding unit having the smallest size that can be included in a maximum encoding unit, that is, a minimum encoding unit.
In other words, a maximum encoding unit can be divided into encoding subunits having different sizes according to different depths. This will be described in detail later with reference to Figures 8A and 8B. In addition, the encoding subunits having different sizes, which are included in the maximum encoding unit, can be predicted or transformed based on processing units having different sizes. In other words, the apparatus 100 can perform a plurality of processing operations for image encoding based on processing units having various sizes and various shapes. To encode image data, processing operations such as prediction, transformation, and entropy encoding are performed, wherein processing units having the same size may be used for every operation or processing units having different sizes may be used for respective operations.
[0028] For example, the apparatus 100 may select a processing unit that is different from an encoding unit to predict the encoding unit. When the size of an encoding unit is 2Nx2N (where N is a positive integer), processing units for prediction may have sizes of 2Nx2N, 2NxN, Nx2N, and NxN. In other words, motion prediction may be performed based on a processing unit having a shape in which at least one of the height and width of an encoding unit is evenly divided by two. Hereinafter, a processing unit that is the basis of prediction is referred to as a prediction unit.
[0029] A prediction mode may be at least one of an intra mode, an inter mode, and a skip mode, and a specific prediction mode may be performed only by a prediction unit having a specific size or shape. For example, the intra mode may be performed only by prediction units having sizes of 2Nx2N and NxN, which have a square shape. Also, the skip mode may be performed only by a prediction unit having a size of 2Nx2N. If a plurality of prediction units exist in an encoding unit, the prediction mode with the fewest encoding errors may be selected after performing prediction for each prediction unit.
[0030] Alternatively, the apparatus 100 may perform frequency transformation on image data based on a processing unit having a size different from that of an encoding unit. For the frequency transformation in the encoding unit, the frequency transformation may be performed based on a processing unit having a size equal to or smaller than that of the encoding unit. Hereinafter, a processing unit that is the basis of the frequency transformation is referred to as a transformation unit. The frequency transformation may be a Discrete Cosine Transform (DCT) or a Karhunen-Loeve Transform (KLT).
[0031] The encoding depth determiner 120 can determine the encoding subunits included in a maximum encoding unit using R-D optimization based on a Lagrangian multiplier. In other words, the encoding depth determiner 120 can determine which shape a plurality of encoding subunits divided from the maximum encoding unit have, wherein the plurality of encoding subunits have different sizes according to their depths. The image data encoder 130 generates a bitstream by encoding the maximum encoding unit based on the division shapes determined by the encoding depth determiner 120.
[0032] The encoding information encoder 140 encodes information about an encoding mode of the maximum encoding unit determined by the encoding depth determiner 120. In other words, the encoding information encoder 140 generates a bitstream by encoding information about a division shape of the maximum encoding unit, information about the maximum depth, and information about an encoding mode of an encoding subunit for each depth.
The information about the encoding mode of the encoding subunit may include at least one of information about a prediction unit of the encoding subunit, information about a prediction mode for each prediction unit, and information about a transformation unit of the encoding subunit.
[0033] Since encoding subunits having different sizes exist for each maximum encoding unit and information about an encoding mode is determined for each encoding subunit, information about at least one encoding mode can be determined for one maximum encoding unit.
[0034] The apparatus 100 can generate encoding subunits by evenly dividing the height and width of a maximum encoding unit by two according to an increase of depth. That is, when the size of a k-th-depth encoding unit is 2Nx2N, the size of a (k+1)-th-depth encoding unit may be NxN.
[0035] Thus, the apparatus 100, according to an exemplary embodiment, can determine an optimal division shape for each maximum encoding unit based on the sizes of maximum encoding units and a maximum depth, taking image characteristics into account. By variably adjusting the size of a maximum encoding unit in consideration of image characteristics and encoding an image by dividing a maximum encoding unit into encoding subunits of different depths, images having various resolutions can be encoded more efficiently.
[0036] Figure 2 is a block diagram of an apparatus 200 for decoding an image, according to an exemplary embodiment. Referring to Figure 2, the apparatus 200 includes an image data acquisition unit 210, an encoding information extractor 220, and an image data decoder 230.
[0037] The image data acquisition unit 210 obtains image data according to maximum encoding units by parsing a bitstream received by the apparatus 200 and outputs the image data to the image data decoder 230. The image data acquisition unit 210 can extract information about a maximum encoding unit of a current frame or slice from a header of the current frame or slice. In other words, the image data acquisition unit 210 divides the bitstream according to the maximum encoding unit so that the image data decoder 230 can decode the image data according to maximum encoding units.
[0038] The encoding information extractor 220 extracts information about a maximum encoding unit, a maximum depth, a division shape of the maximum encoding unit, and an encoding mode of encoding subunits by parsing the bitstream received by the apparatus 200. For example, the encoding information extractor 220 may extract the information described above from the header of the current frame. The information about the division shape and the information about the encoding mode are provided to the image data decoder 230.
[0039] The information about the division shape of the maximum encoding unit may include information about encoding subunits having different sizes according to depths included in the maximum encoding unit, and the information about the encoding mode may include at least one of information about a prediction unit according to an encoding subunit, information about a prediction mode, and information about a transformation unit.
[0040] The image data decoder 230 restores the current frame by decoding the image data of each maximum encoding unit based on the information extracted by the encoding information extractor 220. The image data decoder 230 can decode the encoding subunits included in a maximum encoding unit based on the information about the division shape of the maximum encoding unit.
A decoding process may include at least one of a prediction process, including intra prediction and motion compensation, and an inverse transformation process.
[0041] Furthermore, the image data decoder 230 can perform intra prediction or inter prediction based on the information about a prediction unit and the information about a prediction mode in order to predict a prediction unit. The image data decoder 230 can also perform inverse transformation for each encoding subunit based on the information about a transformation unit of an encoding subunit.
[0042] Figure 3 illustrates hierarchical encoding units, according to an exemplary embodiment. Referring to Figure 3, the exemplary hierarchical encoding units include encoding units whose sizes are 64x64, 32x32, 16x16, 8x8, and 4x4. Besides these, encoding units whose sizes are 64x32, 32x64, 32x16, 16x32, 16x8, 8x16, 8x4, and 4x8 may also exist.
[0043] In the exemplary embodiment illustrated in Figure 3, for first image data 310 whose resolution is 1920x1080, the size of a maximum encoding unit is set to 64x64 and a maximum depth is set to 2. For second image data 320 whose resolution is 1920x1080, the size of a maximum encoding unit is set to 64x64 and a maximum depth is set to 3. For third image data 330 whose resolution is 352x288, the size of a maximum encoding unit is set to 16x16 and a maximum depth is set to 1.
[0044] When the resolution is high or the amount of data is large, a maximum size of an encoding unit may be relatively large to increase the compression ratio and exactly reflect image characteristics. Thus, for the first and second image data 310 and 320 having higher resolution than the third image data 330, 64x64 may be selected as the size of the maximum encoding unit.
[0045] The maximum depth indicates the total number of layers in the hierarchical encoding units. Since the maximum depth of the first image data 310 is 2, an encoding unit 315 of the image data 310 may include a maximum encoding unit whose longer-axis size is 64 and encoding subunits whose longer-axis sizes are 32 and 16, according to an increase of depth.
[0046] On the other hand, since the maximum depth of the third image data 330 is 1, an encoding unit 335 of the image data 330 may include a maximum encoding unit whose longer-axis size is 16 and encoding units whose longer-axis sizes are 8, according to an increase of depth.
[0047] However, since the maximum depth of the second image data 320 is 3, an encoding unit 325 of the image data 320 may include a maximum encoding unit whose longer-axis size is 64 and encoding subunits whose longer-axis sizes are 32, 16, and 8, according to an increase of depth. Since an image is encoded based on smaller encoding subunits as the depth increases, exemplary embodiments are suitable for encoding an image including finer details.
[0048] Figure 4 is a block diagram of an image encoder 400 based on an encoding unit, according to an exemplary embodiment. Referring to Figure 4, an intra predictor 410 performs intra prediction on prediction units of the intra mode in a current frame 405, and a motion estimator 420 and a motion compensator 425 perform inter prediction and motion compensation on prediction units of the inter mode using the current frame 405 and a reference frame 495.
[0049] Residual values are generated based on the prediction units output from the intra predictor 410, the motion estimator 420, and the motion compensator 425.
The generated residual values are output as quantized transform coefficients by passing through a transformer 430 and a quantizer 440.
[0050] The quantized transform coefficients are restored to residual values by passing through an inverse quantizer 460 and an inverse transformer 470. The restored residual values are post-processed by passing through a deblocking unit 480 and a loop filtering unit 490, and output as the reference frame 495. The quantized transform coefficients may be output as a bitstream 455 by passing through an entropy encoder 450.
[0051] To perform encoding based on an encoding method according to an exemplary embodiment, the components of the image encoder 400, i.e., the intra predictor 410, the motion estimator 420, the motion compensator 425, the transformer 430, the quantizer 440, the entropy encoder 450, the inverse quantizer 460, the inverse transformer 470, the deblocking unit 480, and the loop filtering unit 490, perform image encoding processes based on a maximum encoding unit, an encoding subunit according to depth, a prediction unit, and a transformation unit.
[0052] Figure 5 is a block diagram of an image decoder 500 based on an encoding unit, according to an exemplary embodiment. Referring to Figure 5, a bitstream 505 passes through a parser 510 so that encoded image data to be decoded and encoding information used for the decoding are parsed. The encoded image data is output as inverse-quantized data by passing through an entropy decoder 520 and an inverse quantizer 530, and restored to residual values by passing through an inverse transformer 540. The residual values are restored according to encoding units by being added to an intra prediction result of an intra predictor 550 or a motion compensation result of a motion compensator 560. The restored encoding units are used for the prediction of subsequent encoding units or of a subsequent frame by passing through a deblocking unit 570 and a loop filtering unit 580.
[0053] To perform decoding based on a decoding method according to an exemplary embodiment, the components of the image decoder 500, i.e., the parser 510, the entropy decoder 520, the inverse quantizer 530, the inverse transformer 540, the intra predictor 550, the motion compensator 560, the deblocking unit 570, and the loop filtering unit 580, perform image decoding processes based on a maximum encoding unit, an encoding subunit according to depth, a prediction unit, and a transformation unit.
[0054] In particular, the intra predictor 550 and the motion compensator 560 determine a prediction unit and a prediction mode in an encoding subunit by considering a maximum encoding unit and a depth, and the inverse transformer 540 performs inverse transformation by considering the size of a processing unit.
[0055] Figure 6 illustrates a maximum encoding unit, an encoding subunit, and a prediction unit, according to an exemplary embodiment.
[0056] As described above, the encoding apparatus 100 and the decoding apparatus 200, according to one or more exemplary embodiments, use hierarchical encoding units to perform encoding and decoding in consideration of image characteristics. A maximum encoding unit and a maximum depth can be adaptively set according to the image characteristics or variably set according to requirements of a user.
[0057] Referring to Figure 6, a hierarchical encoding unit structure 600, according to an exemplary embodiment, illustrates a maximum encoding unit 610 whose height and width are 64 and whose maximum depth is 4; the size-versus-depth relationship of this hierarchy is sketched below.
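As noted, each depth increment halves the height and width of an encoding unit, so the 64x64 maximum encoding unit with a maximum depth of 4 spans sizes from 64x64 down to 4x4. A minimal sketch of this relationship (the function name is an assumption):

```python
# Sizes of the encoding (sub)units of Figure 6: depth 0 is the maximum
# encoding unit; each depth increment halves the height and width.
def coding_unit_sizes(max_size, max_depth):
    return [max_size >> depth for depth in range(max_depth + 1)]

print(coding_unit_sizes(64, 4))  # [64, 32, 16, 8, 4]
```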
A depth increases along a vertical axis of the hierarchical encoding unit structure 600, and as the depth increases, the heights and widths of encoding subunits 620 to 650 decrease. Prediction units of the maximum encoding unit 610 and of the encoding subunits 620 to 650 are shown along a horizontal axis of the hierarchical encoding unit structure 600.
[0058] The maximum encoding unit 610 has a depth of 0 and a size, i.e., height and width, of 64x64. The depth increases along the vertical axis, such that there exist an encoding subunit 620 whose size is 32x32 and depth is 1, an encoding subunit 630 whose size is 16x16 and depth is 2, an encoding subunit 640 whose size is 8x8 and depth is 3, and an encoding subunit 650 whose size is 4x4 and depth is 4. The encoding subunit 650 whose size is 4x4 and depth is 4 is a minimum encoding unit. The minimum encoding unit 650 can be divided into prediction units, each of which is smaller than the minimum encoding unit.
[0059] In the exemplary embodiment illustrated in Figure 6, examples of a prediction unit are shown along the horizontal axis according to each depth. That is, a prediction unit of the maximum encoding unit 610 whose depth is 0 may be a prediction unit whose size is equal to that of the encoding unit 610, i.e., 64x64, or a prediction unit 612 whose size is 64x32, a prediction unit 614 whose size is 32x64, or a prediction unit 616 whose size is 32x32, which have sizes smaller than that of the encoding unit 610 whose size is 64x64.
[0060] A prediction unit of the encoding unit 620 whose depth is 1 and size is 32x32 may be a prediction unit whose size is equal to that of the encoding unit 620, i.e., 32x32, or a prediction unit 622 whose size is 32x16, a prediction unit 624 whose size is 16x32, or a prediction unit 626 whose size is 16x16, which have sizes smaller than that of the encoding unit 620 whose size is 32x32.
[0061] A prediction unit of the encoding unit 630 whose depth is 2 and size is 16x16 may be a prediction unit whose size is equal to that of the encoding unit 630, i.e., 16x16, or a prediction unit 632 whose size is 16x8, a prediction unit 634 whose size is 8x16, or a prediction unit 636 whose size is 8x8, which have sizes smaller than that of the encoding unit 630 whose size is 16x16.
[0062] A prediction unit of the encoding unit 640 whose depth is 3 and size is 8x8 may be a prediction unit whose size is equal to that of the encoding unit 640, i.e., 8x8, or a prediction unit 642 whose size is 8x4, a prediction unit 644 whose size is 4x8, or a prediction unit 646 whose size is 4x4, which have sizes smaller than that of the encoding unit 640 whose size is 8x8.
[0063] The encoding unit 650 whose depth is 4 and size is 4x4 is a minimum encoding unit and an encoding unit of a maximum depth. A prediction unit of the encoding unit 650 may be a prediction unit 650 whose size is 4x4, a prediction unit 652 having a size of 4x2, a prediction unit 654 having a size of 2x4, or a prediction unit 656 having a size of 2x2.
[0064] Figure 7 illustrates an encoding unit and a transformation unit, according to an exemplary embodiment. The encoding apparatus 100 and the decoding apparatus 200, according to one or more exemplary embodiments, perform encoding with a maximum encoding unit itself or with encoding subunits that are equal to or smaller than the maximum encoding unit and divided from the maximum encoding unit.
[0065] In the encoding process, the size of a transformation unit for frequency transformation is selected to be no larger than that of a corresponding encoding unit.
For example, when a current encoding unit 710 has a size of 64x64, frequency transformation can be performed using a transformation unit 720 having a size of 32x32.
[0066] Figures 8A and 8B illustrate division shapes of an encoding unit, a prediction unit, and a transformation unit, according to an exemplary embodiment. Figure 8A illustrates an encoding unit and a prediction unit, according to an exemplary embodiment.
[0067] A left side of Figure 8A shows a division shape selected by an encoding apparatus 100, according to an exemplary embodiment, in order to encode a maximum encoding unit 810. The apparatus 100 divides the maximum encoding unit 810 into various shapes, performs encoding, and selects an optimal division shape by comparing encoding results of the various division shapes with each other based on the R-D cost. When it is optimal to encode the maximum encoding unit 810 as it is, the maximum encoding unit 810 may be encoded without dividing it, as illustrated in Figures 8A and 8B.
[0068] Referring to the left side of Figure 8A, the maximum encoding unit 810, whose depth is 0, is encoded by dividing it into encoding subunits whose depths are equal to or greater than 1. That is, the maximum encoding unit 810 is divided into 4 encoding subunits whose depths are 1, and all or some of the encoding subunits whose depths are 1 are divided into encoding subunits whose depths are 2.
[0069] An encoding subunit located on an upper-right side and an encoding subunit located on a lower-left side among the encoding subunits whose depths are 1 are divided into encoding subunits whose depths are equal to or greater than 2. Some of the encoding subunits whose depths are equal to or greater than 2 may be divided into encoding subunits whose depths are equal to or greater than 3.
[0070] The right side of Figure 8A shows a division shape of a prediction unit for the maximum encoding unit 810. Referring to the right side of Figure 8A, a prediction unit 860 for the maximum encoding unit 810 may be divided differently from the maximum encoding unit 810. In other words, a prediction unit for each of the encoding subunits may be smaller than the corresponding encoding subunit.
[0071] For example, a prediction unit for an encoding subunit 854 located on a lower-right side among the encoding subunits whose depths are 1 may be smaller than the encoding subunit 854. Also, prediction units for some encoding subunits 814, 816, 850, and 852 of the encoding subunits 814, 816, 818, 828, 850, and 852, whose depths are 2, may be smaller than the encoding subunits 814, 816, 850, and 852, respectively. Also, prediction units for encoding subunits 822, 832, and 848, whose depths are 3, may be smaller than the encoding subunits 822, 832, and 848, respectively. The prediction units may have a shape in which the respective encoding subunits are evenly divided by two in a direction of height or width, or a shape in which the respective encoding subunits are evenly divided by four in the directions of height and width.
[0072] Figure 8B illustrates a prediction unit and a transformation unit, according to an exemplary embodiment. A left side of Figure 8B shows a division shape of a prediction unit for the maximum encoding unit 810 shown on the right side of Figure 8A, and a right side of Figure 8B shows a division shape of a transformation unit of the maximum encoding unit 810.
[0073] Referring to the right side of Figure 8B, a division shape of a transformation unit 870 can be set differently from that of the prediction unit 860.
For example, although a prediction unit for the encoding unit 854, whose depth is 1, is selected with a shape in which the height of the encoding unit 854 is evenly divided by two, a transformation unit can be selected with the same size as the encoding unit 854. Likewise, although prediction units for the encoding units 814 and 850, whose depths are 2, are selected with a shape in which the height of each of the encoding units 814 and 850 is evenly divided by two, a transformation unit can be selected with the same size as the original size of each of the encoding units 814 and 850.
[0074] A transformation unit may be selected with a size smaller than that of a prediction unit. For example, when a prediction unit for the encoding unit 852, whose depth is 2, is selected with a shape in which the width of the encoding unit 852 is evenly divided by two, a transformation unit may be selected with a shape in which the encoding unit 852 is evenly divided by four in the directions of height and width, which is smaller than the shape of the prediction unit.
[0075] Figure 9 is a block diagram of an apparatus 900 for encoding a motion vector, according to an exemplary embodiment. The apparatus 900 for encoding a motion vector may be included in the apparatus 100 described above with reference to Figure 1 or in the image encoder 400 described above with reference to Figure 4. Referring to Figure 9, the motion vector encoding apparatus 900 includes a predictor 910, a first encoder 920, and a second encoder 930.
[0076] In order to decode a block encoded using inter prediction, i.e., inter-frame prediction, information about a motion vector indicating a positional difference between a current block and a similar block in a reference frame is used. Thus, information about motion vectors is encoded and inserted into a bitstream in an image encoding process. However, if the information about motion vectors is encoded and inserted as it is, overhead for encoding the information about motion vectors increases, thereby decreasing the compression ratio of image data.
[0077] Therefore, in an image encoding process, information about a motion vector is compressed by predicting a motion vector of a current block, encoding only a difference vector between a motion vector predictor generated as the prediction result and the original motion vector, and inserting the encoded difference vector into a bitstream. Figure 9 illustrates the apparatus 900 for encoding a motion vector, which uses such a motion vector predictor.
[0078] Referring to Figure 9, the predictor 910 determines whether a motion vector of a current block is prediction-encoded based on an explicit mode or an implicit mode.
[0079] As described above, a codec such as H.264/MPEG-4 AVC uses motion vectors of previously encoded blocks adjacent to a current block to predict a motion vector of the current block. That is, an average value of the motion vectors of previously encoded blocks adjacent to the left, upper, and upper-right sides of the current block is used as a motion vector predictor of the current block. Since the motion vectors of all blocks encoded using inter prediction are predicted using the same method, information about a motion vector predictor does not have to be encoded separately.
However, the apparatus 100 or the image encoder 400, according to one or more exemplary embodiments, uses both a mode in which information about a motion vector predictor is not separately encoded and a mode in which information about a motion vector predictor is encoded, in order to predict a motion vector more accurately; these two modes will now be described in detail.
(1) Explicit Mode
[0080] One of the methods of encoding information about a motion vector predictor, which can be selected by the predictor 910, is a mode of explicitly encoding information about a motion vector predictor of a current block. The explicit mode is a mode of calculating at least one motion vector predictor candidate and separately encoding information indicating which motion vector predictor candidate is used to predict a motion vector of a current block. Motion vector predictor candidates, according to one or more exemplary embodiments, will now be described with reference to Figures 10A, 10B, and 11A to 11C.
[0081] Figures 10A and 10B illustrate motion vector predictor candidates of an explicit mode, according to one or more exemplary embodiments. Referring to Figure 10A, a motion vector prediction method, according to an exemplary embodiment, may use one of the motion vectors of previously encoded blocks adjacent to a current block as a motion vector predictor of the current block. A leftmost block a0 among blocks adjacent to an upper side of the current block, an uppermost block b0 among blocks adjacent to a left side thereof, a block c adjacent to an upper-right side thereof, a block d adjacent to an upper-left side thereof, and a block e adjacent to a lower-left side thereof can be used as motion vector predictors of the current block.
[0082] Referring to Figure 10B, the motion vectors of all blocks adjacent to a current block can be used as motion vector predictor candidates of the current block. In other words, the motion vectors of not only the leftmost block a0 among the blocks adjacent to the upper side of the current block but of all blocks adjacent to the upper side thereof can be used as motion vector predictor candidates of the current block. Furthermore, the motion vectors of not only the uppermost block b0 among the blocks adjacent to the left side thereof but of all blocks adjacent to the left side thereof can be used as motion vector predictor candidates of the current block.
[0083] Alternatively, an average value of the motion vectors of adjacent blocks can be used as a motion vector predictor candidate. For example, average value(mv_a0, mv_b0, mv_c) can be used as a motion vector predictor of the current block, where mv_a0 indicates a motion vector of the block a0, mv_b0 indicates a motion vector of the block b0, and mv_c indicates a motion vector of the block c.
[0084] Figures 11A to 11C illustrate motion vector predictor candidates of an explicit mode, according to another exemplary embodiment. Figure 11A illustrates a method of calculating a motion vector predictor of a Bi-directional Predictive frame (referred to as a B frame), according to an exemplary embodiment. When a current frame including a current block is a B frame, in which bidirectional prediction is performed, a motion vector generated based on a temporal distance can be a motion vector predictor candidate.
[0085] Referring to Figure 11A, a motion vector predictor of a current block 1100 of a current frame 1110 can be generated using a motion vector of a block 1120 at a co-located position of a temporally preceding frame 1112.
For example, if a motion vector mv_colA of the block 1120 at a position co-located with the current block 1100 is generated with respect to a searched block 1122 of a temporally subsequent frame 1114 of the current frame 1110, motion vector predictor candidates mv_L0A and mv_L1A of the current block 1100 can be generated according to the following equations:
mv_L1A = (t1/t2) x mv_colA
mv_L0A = mv_L1A - mv_colA
where mv_L0A indicates a motion vector predictor of the current block 1100 with respect to the temporally preceding frame 1112, and mv_L1A indicates a motion vector predictor of the current block 1100 with respect to the temporally subsequent frame 1114.
[0086] Figure 11B illustrates a method of generating a motion vector predictor of a B frame, according to another exemplary embodiment. In comparison with the method illustrated in Figure 11A, a block 1130 at a position co-located with the current block 1100 exists in the temporally subsequent frame 1114 in Figure 11B.
[0087] Referring to Figure 11B, a motion vector predictor of the current block 1100 of the current frame 1110 can be generated using a motion vector of the block 1130 at a co-located position of the temporally subsequent frame 1114. For example, if a motion vector mv_colB of the block 1130 at a position co-located with the current block 1100 is generated with respect to a searched block 1132 of the temporally preceding frame 1112 of the current frame 1110, motion vector predictor candidates mv_L0B and mv_L1B of the current block 1100 can be generated according to the following equations:
mv_L0B = (t3/t4) x mv_colB
mv_L1B = mv_L0B - mv_colB
where mv_L0B indicates a motion vector predictor of the current block 1100 with respect to the temporally preceding frame 1112, and mv_L1B indicates a motion vector predictor of the current block 1100 with respect to the temporally subsequent frame 1114.
[0088] In generating a motion vector predictor of the current block 1100 of a B frame, at least one of the methods illustrated in Figures 11A and 11B can be used. In other words, since a motion vector predictor is generated using a motion vector and a temporal distance of the block 1120 or 1130 at a position co-located with the current block 1100, motion vector predictors can be generated using the methods illustrated in Figures 11A and 11B if the motion vectors of the blocks 1120 and 1130 at the co-located position exist. Thus, the predictor 910, according to an exemplary embodiment, may generate a motion vector predictor of the current block 1100 using only a block having a motion vector from among the blocks 1120 and 1130 at the co-located position.
[0089] For example, when the block 1120 at the co-located position of the temporally preceding frame 1112 is encoded using intra prediction instead of inter prediction, a motion vector of the block 1120 does not exist, and therefore a motion vector predictor of the current block 1100 cannot be generated using the method of generating a motion vector predictor illustrated in Figure 11A.
[0090] Figure 11C illustrates a method of generating a motion vector predictor of a P frame, according to an exemplary embodiment. Referring to Figure 11C, a motion vector predictor of the current block 1100 of the current frame 1110 can be generated using a motion vector of a block 1140 at a co-located position of the temporally preceding frame 1112. For example, if a motion vector mv_colC of the block 1140 at a position co-located with the current block 1100 is generated with respect to a searched block 1142 of another temporally preceding frame 1116, a motion vector predictor candidate mv_L0C of the current block 1100 can be generated according to the following equation:
mv_L0C = (t6/t5) x mv_colC
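The temporal candidates above follow directly from the quoted equations. A minimal sketch, assuming motion vectors are (x, y) tuples and that the t arguments are the temporal distances named in the text:

```python
# Sketch of the temporal scaling in Figures 11A to 11C; names are assumptions.
def scale_mv(mv, t_num, t_den):
    return (mv[0] * t_num / t_den, mv[1] * t_num / t_den)

def b_frame_predictors(mv_col, t_num, t_den):
    """Figure 11A pattern: mv_L1A = (t1/t2) x mv_colA, then
    mv_L0A = mv_L1A - mv_colA (Figure 11B is analogous with t3/t4)."""
    mv_scaled = scale_mv(mv_col, t_num, t_den)
    mv_other = (mv_scaled[0] - mv_col[0], mv_scaled[1] - mv_col[1])
    return mv_other, mv_scaled

def p_frame_predictor(mv_col, t6, t5):
    """Figure 11C: mv_L0C = (t6/t5) x mv_colC."""
    return scale_mv(mv_col, t6, t5)
```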
[0091] Since the current frame 1110 is a P frame, the number of motion vector predictor candidates of the current block 1100 generated according to Figure 11C is one, unlike in Figures 11A and 11B.
[0092] In summary, a set C of motion vector predictor candidates according to Figures 10A, 10B, and 11A to 11C can be generated by the following equation:
C = {average value(mv_a0, mv_b0, mv_c), mv_a0, mv_a1, ..., mv_aN, mv_b0, mv_b1, ..., mv_bN, mv_c, mv_d, mv_e, mv_temporal}
[0093] Alternatively, the set C can be generated by reducing the number of motion vector predictor candidates, according to the following equation:
C = {average value(mv_a', mv_b', mv_c'), mv_a', mv_b', mv_c', mv_temporal}
[0094] Here, mv_x indicates a motion vector of a block x, average value() indicates an average value, and mv_temporal indicates the motion vector predictor candidates generated using a temporal distance, described above in relation to Figures 11A to 11C.
[0095] Also, mv_a' indicates a first valid motion vector among mv_a0, mv_a1, ..., mv_aN. For example, when the block a0 is encoded using intra prediction, the motion vector mv_a0 of the block a0 is not valid, and thus mv_a' = mv_a1; if the motion vector of the block a1 is also not valid, mv_a' = mv_a2.
[0096] Likewise, mv_b' indicates a first valid motion vector among mv_b0, mv_b1, ..., mv_bN, and mv_c' indicates a first valid motion vector among mv_c, mv_d, and mv_e.
[0097] The explicit mode is a mode of encoding information indicating which motion vector was used as the motion vector predictor of a current block. For example, when a motion vector is encoded in the explicit mode, a binary number can be allocated to each of the elements of the set C, i.e., the motion vector predictor candidates, and if one of the candidates is used as the motion vector predictor of a current block, the corresponding binary number can be output.
[0098] It will be readily understood by those of ordinary skill in the art that motion vector predictor candidates other than those described above in relation to the explicit mode may also be used.
(2) Implicit Mode
[0099] Another method of encoding information about a motion vector predictor, which can be selected by the predictor 910, is a mode of encoding information indicating that a motion vector predictor of a current block is generated based on blocks or pixels included in a previously encoded area adjacent to the current block. Unlike the explicit mode, this mode is a mode of encoding information indicating generation of a motion vector predictor in the implicit mode, without encoding information for specifying a motion vector predictor.
[000100] As described above, a codec such as H.264/MPEG-4 AVC uses motion vectors of previously encoded blocks adjacent to a current block to predict a motion vector of the current block. That is, an average value of the motion vectors of previously encoded blocks adjacent to the left, upper, and upper-right sides of the current block is used as a motion vector predictor of the current block. In this case, unlike in the explicit mode, information for selecting one of motion vector predictor candidates does not have to be encoded.
[000101] In other words, if only information indicating that a motion vector predictor of a current block has been encoded in the implicit mode is encoded in an image encoding process, an average value of the motion vectors of previously encoded blocks adjacent to the left, upper, and upper-right sides of the current block can be used as a motion vector predictor of the current block in an image decoding process.
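Returning to the explicit mode for a moment, the reduced candidate set C of paragraph [0093] can be assembled as in the sketch below, where None marks an invalid motion vector (e.g., that of an intra-coded neighbor). The helper names are assumptions:

```python
# Sketch of building C = {average value(mv_a', mv_b', mv_c'), mv_a',
# mv_b', mv_c', mv_temporal}; None marks an invalid motion vector.
def first_valid(mvs):
    return next((mv for mv in mvs if mv is not None), None)

def average_value(mvs):
    mvs = [mv for mv in mvs if mv is not None]
    if not mvs:
        return None
    return (sum(mv[0] for mv in mvs) / len(mvs),
            sum(mv[1] for mv in mvs) / len(mvs))

def candidate_set(mv_a_group, mv_b_group, mv_cde_group, mv_temporal):
    mv_a = first_valid(mv_a_group)    # mv_a0, mv_a1, ..., mv_aN
    mv_b = first_valid(mv_b_group)    # mv_b0, mv_b1, ..., mv_bN
    mv_c = first_valid(mv_cde_group)  # mv_c, mv_d, mv_e
    candidates = [average_value([mv_a, mv_b, mv_c]),
                  mv_a, mv_b, mv_c, mv_temporal]
    return [mv for mv in candidates if mv is not None]
```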
[000102] Furthermore, an image encoding method, according to an exemplary embodiment, provides a new implicit mode in addition to the method of using an average value of the motion vectors of previously encoded blocks adjacent to the left, upper, and upper-right sides of a current block as a motion vector predictor of the current block. This will now be described in detail with reference to Figure 12.
[000103] Figure 12 illustrates a method of generating a motion vector predictor in an implicit mode, according to an exemplary embodiment. Referring to Figure 12, pixels 1222 included in a previously encoded area 1220 adjacent to a current block 1200 of a current frame 1210 are used to generate a motion vector predictor of the current block 1200. Corresponding pixels 1224 are determined by searching a reference frame 1212 using the adjacent pixels 1222. The corresponding pixels 1224 can be determined by calculating a Sum of Absolute Differences (SAD). When the corresponding pixels 1224 are determined, a motion vector mv_model of the adjacent pixels 1222 is generated, and the motion vector mv_model can be used as a motion vector predictor of the current block 1200 (a sketch of this search is given after the mode-selection discussion below).
[000104] If the mode of using an average value of the motion vectors of adjacent blocks as a motion vector predictor is defined as "implicit mode_1", and the mode of generating a motion vector predictor using pixels adjacent to a current block is defined as "implicit mode_2", a motion vector predictor can be generated using one of the two implicit modes, implicit mode_1 and implicit mode_2, by encoding information about one of the two implicit modes in an image encoding process and referring to the information about the mode in an image decoding process.
(3) Mode Selection
[000105] There can be various criteria for the predictor 910 to select one of the explicit and implicit modes described above.
[000106] Since one of a plurality of motion vector predictor candidates is selected in the explicit mode, a motion vector predictor more similar to a motion vector of a current block can be selected. However, since information indicating one of a plurality of motion vector predictor candidates is encoded, more overhead than in the implicit modes can occur. Thus, for an encoding unit having a large size, a motion vector may be encoded in the explicit mode, because the increase in error occurring when a motion vector is wrongly predicted is greater for an encoding unit having a larger size than for an encoding unit having a smaller size, and the number of times overhead occurs decreases for each frame.
[000107] For example, when a frame evenly divided into m encoding units having a size of 64x64 is encoded in the explicit mode, the number of times overhead occurs is m. However, when a frame having the same size, evenly divided into 4m encoding units having a size of 32x32, is encoded in the explicit mode, the number of times overhead occurs is 4m.
[000108] Thus, the predictor 910, according to an exemplary embodiment, may select one of the explicit and implicit modes based on the size of an encoding unit when a motion vector of a current block is encoded.
[000109] Since the size of an encoding unit in the image encoding method and the image decoding method, according to the exemplary embodiments described above with reference to Figures 1 to 8, is represented using a depth, the predictor 910 determines, based on a depth of a current block, whether a motion vector of the current block is encoded in the explicit mode or the implicit mode.
For example, when encoding units whose depths are 0 and 1 are inter-predicted, motion vectors of the encoding units are encoded in the explicit mode, and when encoding units whose depths are equal to or greater than 2 are inter-predicted, motion vectors of the encoding units are encoded in the implicit mode.
[000110] According to another exemplary embodiment, the predictor 910 can select the explicit or implicit mode for each frame or slice unit. Since image characteristics are different for each frame or slice, the explicit or implicit mode can be selected for each frame or slice by considering these image characteristics. Motion vectors of encoding units included in a current frame or slice can be prediction-encoded by selecting an optimal mode from among the explicit and implicit modes, taking the R-D cost into account.
[000111] For example, if the motion vectors of encoding units included in a frame or slice can be exactly predicted without using the explicit mode, the motion vectors of all encoding units included in the frame or slice may be prediction-encoded in the implicit mode.
[000112] According to another exemplary embodiment, the predictor 910 can select the explicit or implicit mode according to whether a current block has been encoded in the skip mode. The skip mode is an encoding mode in which flag information indicating that a current block has been encoded in the skip mode is encoded without encoding a pixel value.
[000113] Also, the skip mode is a mode in which a pixel value of a current block is not encoded, since a prediction block generated by performing motion compensation using a motion vector predictor as a motion vector of the current block is similar to the current block. Thus, as a motion vector predictor is generated more similarly to a motion vector of a current block, the probability of encoding the current block in the skip mode is higher. Accordingly, a block encoded in the skip mode can be encoded in the explicit mode.
[000114] Referring back to Figure 9, when the predictor 910 selects one of the explicit and implicit modes and determines a motion vector predictor according to the selected mode, the first encoder 920 and the second encoder 930 encode information about an encoding mode and a motion vector.
[000115] Specifically, the first encoder 920 encodes information about a motion vector predictor of a current block. In more detail, when the predictor 910 determines that the motion vector of the current block is encoded in the explicit mode, the first encoder 920 encodes information indicating that a motion vector predictor has been generated in the explicit mode and information indicating which motion vector predictor candidate was used as the motion vector predictor of the current block.
[000116] In contrast, when the predictor 910 determines that the motion vector of the current block is encoded in the implicit mode, the first encoder 920 encodes information indicating that the motion vector predictor of the current block has been generated in the implicit mode. In other words, the first encoder 920 encodes information indicating that the motion vector predictor of the current block has been generated using blocks or pixels adjacent to the current block. If two or more implicit modes are used, the first encoder 920 may additionally encode information indicating which implicit mode was used to generate the motion vector predictor of the current block.
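For the implicit mode_2 search of paragraph [000103] and Figure 12, a brute-force sketch follows. It assumes frames indexed as frame[y][x], a one-pixel-thick L-shaped template of previously decoded pixels, and a small search window; boundary handling is omitted, and all names are illustrative:

```python
# Sketch of the SAD-based template matching of Figure 12 (implicit mode_2).
def template_pixels(frame, x, y, block):
    """L-shaped strip of pixels above and to the left of a block at (x, y)."""
    top = [frame[y - 1][xx] for xx in range(x - 1, x + block)]
    left = [frame[yy][x - 1] for yy in range(y, y + block)]
    return top + left

def template_matching_mvp(cur_frame, ref_frame, x, y, block=8, search=4):
    """Return the displacement that minimizes the Sum of Absolute
    Differences between the current template (pixels 1222) and the
    corresponding reference-frame pixels (pixels 1224)."""
    target = template_pixels(cur_frame, x, y, block)
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            candidate = template_pixels(ref_frame, x + dx, y + dy, block)
            cost = sum(abs(a - b) for a, b in zip(target, candidate))
            if cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv  # used as the motion vector predictor of the current block
```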
[000117] The second encoder 930 encodes a motion vector of a current block based on a motion vector predictor determined by the predictor 910. In other words, the second encoder 930 generates a difference vector by subtracting the motion vector predictor generated by the predictor 910 from the motion vector of the current block generated as a result of motion compensation, and encodes information about the difference vector.
[000118] Figure 13 is a block diagram of an apparatus 1300 for decoding a motion vector, according to an exemplary embodiment. The motion vector decoding apparatus 1300 may be included in the image decoding apparatus 200 described above with reference to Figure 2 or in the image decoder 500 described above with reference to Figure 5. Referring to Figure 13, the motion vector decoding apparatus 1300 includes a first decoder 1310, a second decoder 1320, a predictor 1330, and a motion vector restorer 1340.
[000119] The first decoder 1310 decodes information about a motion vector predictor of a current block, which is included in a bitstream. In more detail, the first decoder 1310 decodes information indicating whether the motion vector predictor of the current block has been encoded in the explicit mode or the implicit mode. When the motion vector predictor of the current block has been encoded in the explicit mode, the first decoder 1310 additionally decodes information indicating the motion vector predictor used as the motion vector predictor of the current block from among a plurality of motion vector predictor candidates. When the motion vector predictor of the current block has been encoded in the implicit mode, the first decoder 1310 may additionally decode information indicating which of a plurality of implicit modes was used to encode the motion vector predictor of the current block.
[000120] The second decoder 1320 decodes a difference vector between a motion vector and the motion vector predictor of the current block, which is included in the bitstream.
[000121] The predictor 1330 generates a motion vector predictor of the current block based on the information about the motion vector predictor of the current block, which was decoded by the first decoder 1310.
[000122] When the information about the motion vector predictor of the current block encoded in the explicit mode is decoded, the predictor 1330 generates a motion vector predictor from among the motion vector predictor candidates described above with reference to Figures 10A, 10B, and 11A to 11C, and uses the generated motion vector predictor as the motion vector predictor of the current block.
[000123] When the information about the motion vector predictor of the current block encoded in the implicit mode is decoded, the predictor 1330 generates the motion vector predictor of the current block using blocks or pixels included in a previously decoded area adjacent to the current block. In more detail, the predictor 1330 generates an average value of the motion vectors of blocks adjacent to the current block as the motion vector predictor of the current block, or generates the motion vector predictor of the current block by searching a reference frame using pixels adjacent to the current block.
[000124] The motion vector restorer 1340 restores a motion vector of the current block by adding the motion vector predictor generated by the predictor 1330 to the difference vector decoded by the second decoder 1320. The restored motion vector is used for motion compensation of the current block.
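The decoding path of Figure 13 can be sketched in the same style. It assumes the mode flag and, in the explicit case, a candidate index have already been entropy-decoded; only implicit mode_1 (the neighbor average) is shown for the implicit branch, and all names are assumptions:

```python
# Sketch of predictor 1330 and motion vector restorer 1340.
EXPLICIT, IMPLICIT = 0, 1

def generate_predictor(mode, candidate_index, candidates, neighbor_mvs):
    if mode == EXPLICIT:
        return candidates[candidate_index]  # index decoded from the bitstream
    # Implicit mode_1: average of the adjacent blocks' motion vectors.
    return (sum(mv[0] for mv in neighbor_mvs) / len(neighbor_mvs),
            sum(mv[1] for mv in neighbor_mvs) / len(neighbor_mvs))

def decode_motion_vector(mode, candidate_index, mvd, candidates, neighbor_mvs):
    mvp = generate_predictor(mode, candidate_index, candidates, neighbor_mvs)
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])  # restorer: mv = mvp + mvd
```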
[000125] Figure 14 is a flowchart of a method of encoding a motion vector, according to an exemplary embodiment. Referring to Figure 14, the motion vector encoding apparatus 900, according to an exemplary embodiment, selects one of the explicit mode and the implicit mode as a mode of encoding information about a motion vector predictor, in operation 1410.

[000126] The explicit mode is a mode of encoding, as the information about a motion vector predictor, information indicating a motion vector predictor candidate from among at least one motion vector predictor candidate, and the implicit mode is a mode of encoding, as the information about the motion vector predictor, information indicating that the motion vector predictor was generated based on blocks or pixels included in a previously encoded area adjacent to a current block. Detailed descriptions thereof have been given above with reference to Figures 10A, 10B, 11A to 11C, and 12.

[000127] A mode may be selected based on the size of the current block, i.e., a depth of the current block, or selected in units of a current frame or slice in which the current block is included. Alternatively, a mode may be selected according to whether the current block is encoded in the skip mode.

[000128] In operation 1420, the motion vector encoding apparatus 900 determines a motion vector predictor according to the mode selected in operation 1410. In detail, the motion vector encoding apparatus 900 determines the motion vector predictor of the current block based on the explicit or implicit mode selected in operation 1410. In more detail, the motion vector encoding apparatus 900 determines a motion vector predictor candidate from among at least one motion vector predictor candidate as the motion vector predictor of the current block in the explicit mode, or determines the motion vector predictor of the current block based on blocks or pixels adjacent to the current block in the implicit mode.

[000129] In operation 1430, the motion vector encoding apparatus 900 encodes the information about the motion vector predictor determined in operation 1420. In the case of the explicit mode, the motion vector encoding apparatus 900 encodes information indicating a motion vector predictor candidate from among at least one motion vector predictor candidate as the motion vector predictor of the current block, and information indicating that the information about the motion vector predictor of the current block has been encoded in the explicit mode. In the case of the implicit mode, the motion vector encoding apparatus 900 encodes information indicating that the motion vector predictor of the current block has been generated based on blocks or pixels included in a previously encoded area adjacent to the current block. In the case of a plurality of implicit modes, the motion vector encoding apparatus 900 may further encode information indicating one of the plurality of implicit modes.

[000130] In operation 1440, the motion vector encoding apparatus 900 encodes a difference vector generated by subtracting the motion vector predictor determined in operation 1420 from the motion vector of the current block.

[000131] Figure 15 is a flowchart of a method of decoding a motion vector, according to an exemplary embodiment. Referring to Figure 15, the motion vector decoding apparatus 1300, according to an exemplary embodiment, decodes information about a motion vector predictor of a current block, which is included in a bitstream, in operation 1510.
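Operations 1410 to 1440 can be summarized in one sketch. In the following illustration, motion vectors are again integer tuples, the output "bitstream" is simply a list of (syntax element, value) pairs so that no entropy-coding API needs to be invented, and the candidate-selection cost (sum of absolute residual components) is an illustrative choice, not one mandated by the patent.

```python
# A minimal sketch of operations 1410-1440 under the assumptions stated in
# the lead-in; all names are hypothetical.

def encode_motion_vector(mv, use_explicit, candidates, neighbor_mvs, out):
    if use_explicit:
        # Operation 1420 (explicit): pick the candidate with the smallest
        # residual, using the sum of absolute components as the cost.
        index, predictor = min(
            enumerate(candidates),
            key=lambda ic: abs(mv[0] - ic[1][0]) + abs(mv[1] - ic[1][1]))
        out.append(("mode", "explicit"))        # operation 1430
        out.append(("candidate_index", index))
    else:
        # Operation 1420 (implicit): average of adjacent-block motion vectors.
        n = len(neighbor_mvs)
        predictor = (sum(v[0] for v in neighbor_mvs) // n,
                     sum(v[1] for v in neighbor_mvs) // n)
        out.append(("mode", "implicit"))        # operation 1430
    # Operation 1440: encode the difference vector mv - predictor.
    out.append(("mvd", (mv[0] - predictor[0], mv[1] - predictor[1])))
    return predictor

# Example: explicit mode with two candidates.
syntax = []
encode_motion_vector((5, -3), True, [(4, -2), (0, 0)], [], syntax)
print(syntax)  # [('mode', 'explicit'), ('candidate_index', 0), ('mvd', (1, -1))]
```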
In detail, the motion vector decoding apparatus 1300 decodes information about the mode used to encode the motion vector predictor of the current block, from among the explicit mode and the implicit mode.

[000132] In the case of the explicit mode, the motion vector decoding apparatus 1300 decodes information indicating that the motion vector predictor of the current block was encoded in the explicit mode and information about a motion vector predictor candidate from among at least one motion vector predictor candidate. In the case of the implicit mode, the motion vector decoding apparatus 1300 decodes information indicating that the motion vector predictor of the current block was generated based on blocks or pixels included in a previously decoded area adjacent to the current block. In the case of a plurality of implicit modes, the motion vector decoding apparatus 1300 may further decode information indicating one of the plurality of implicit modes.

[000133] In operation 1520, the motion vector decoding apparatus 1300 decodes information about a difference vector. The difference vector is a vector of the difference between the motion vector predictor of the current block and the motion vector of the current block.

[000134] In operation 1530, the motion vector decoding apparatus 1300 generates the motion vector predictor of the current block based on the information about the motion vector predictor decoded in operation 1510. In detail, the motion vector decoding apparatus 1300 generates the motion vector predictor of the current block according to the explicit or implicit mode. In more detail, the motion vector decoding apparatus 1300 generates the motion vector predictor of the current block by selecting a motion vector predictor candidate from among at least one motion vector predictor candidate, or by using blocks or pixels included in a previously decoded area adjacent to the current block.

[000135] In operation 1540, the motion vector decoding apparatus 1300 restores the motion vector of the current block by summing the difference vector decoded in operation 1520 and the motion vector predictor generated in operation 1530.

[000136] While exemplary embodiments have been particularly shown and described above, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present inventive concept as defined by the following claims.

[000137] Furthermore, a system according to an exemplary embodiment may be implemented using computer-readable code on a computer-readable recording medium. For example, at least one of the image encoding apparatus 100, the image decoding apparatus 200, the image encoder 400, the image decoder 500, the motion vector encoding apparatus 900, and the motion vector decoding apparatus 1300, according to exemplary embodiments, may include a bus coupled to the units of each of the devices shown in Figures 1, 2, 4, 5, 9, and 13, and at least one processor connected to the bus. In addition, a memory coupled to the at least one processor for executing commands as described above may be included and connected to the bus to store the commands and received or generated messages.

[000138] A computer-readable recording medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of computer-readable recording media include read-only memory (ROM), random access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices.
The computer-readable recording medium may also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion.
Claims (2) [0001] 1. METHOD OF DECODING AN IMAGE, the method characterized in that it comprises: obtaining prediction mode information of a current block from a bitstream; determining motion vector predictor candidates from among motion vectors of adjacent blocks adjacent to the current block; and determining a motion vector predictor of the current block from among the motion vector predictor candidates based on the prediction mode information of the current block, wherein the adjacent blocks comprise a first block outside the current block, located adjacent to and to the left of a leftmost block among blocks adjacent to a lower side of the current block, and located adjacent to and below a lowermost block among blocks adjacent to a left side of the current block. [0002] 2. METHOD OF DECODING AN IMAGE, characterized in that it comprises: obtaining prediction mode information of a current block from a bitstream; when the prediction mode information indicates that the prediction mode of the current block is inter prediction, determining motion vector predictor candidates from among motion vectors of neighboring blocks adjacent to the current block; and determining a motion vector predictor of the current block from among the motion vector predictor candidates; wherein the adjacent blocks comprise a lower-left block located on a lower-left side of the current block, wherein the image is hierarchically split from a plurality of maximum coding units, according to information about a maximum size of a coding unit, into coding units coded according to depths, wherein a coding unit of a current depth is one of rectangular data units split from a coding unit of an upper depth, wherein the coding unit of the current depth is split into coding units of a lower depth independently of neighboring coding units, and wherein the coding units having a hierarchical structure comprise encoded coding units among the coding units split from a maximum coding unit.
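For illustration only, the lower-left adjacent block recited in the claims can be located as in the sketch below, assuming a sample-coordinate convention in which x grows rightward, y grows downward, and a block is given by its top-left sample and its size; the function name is hypothetical.

```python
# A minimal sketch (assumed coordinate convention) of locating the lower-left
# adjacent block: the block adjacent to and below the lowermost block on the
# current block's left side.

def lower_left_neighbor_position(x: int, y: int, height: int) -> tuple[int, int]:
    # One sample to the left of the current block and one row below its
    # bottom edge, i.e., inside the lower-left adjacent block.
    return (x - 1, y + height)

print(lower_left_neighbor_position(64, 64, 16))  # (63, 80)
```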